Conversation

@rschalo
Contributor

@rschalo rschalo commented Mar 24, 2025

Fixes #N/A

Description

Improve the performance of single-node consolidation by caching requirements.
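
The intent, roughly: derive each candidate node's requirements once per consolidation pass and reuse them across simulations, rather than recomputing them for every candidate. Below is a minimal, hypothetical Go sketch of that memoization pattern; the names (simulationCache, nodeRequirements, requirementsFor) are illustrative stand-ins and not Karpenter's actual types or API.

// Illustrative sketch only: memoize per-node requirements for the duration of a
// consolidation pass. All names here are hypothetical stand-ins.
package main

import (
    "fmt"
    "sync"
)

// nodeRequirements is a stand-in for a read-only view of requirements derived
// from a node's labels.
type nodeRequirements map[string][]string

// simulationCache caches derived requirements keyed by node name. Caching is
// safe only if node labels are treated as immutable within a single pass.
type simulationCache struct {
    mu   sync.RWMutex
    reqs map[string]nodeRequirements
}

func newSimulationCache() *simulationCache {
    return &simulationCache{reqs: map[string]nodeRequirements{}}
}

// requirementsFor returns the cached requirements for a node, computing them at
// most once per pass.
func (c *simulationCache) requirementsFor(nodeName string, compute func() nodeRequirements) nodeRequirements {
    c.mu.RLock()
    if r, ok := c.reqs[nodeName]; ok {
        c.mu.RUnlock()
        return r
    }
    c.mu.RUnlock()

    c.mu.Lock()
    defer c.mu.Unlock()
    if r, ok := c.reqs[nodeName]; ok { // re-check after taking the write lock
        return r
    }
    r := compute()
    c.reqs[nodeName] = r
    return r
}

func main() {
    cache := newSimulationCache()
    computations := 0
    for i := 0; i < 3; i++ {
        _ = cache.requirementsFor("node-a", func() nodeRequirements {
            computations++ // the expensive label-to-requirements conversion runs once
            return nodeRequirements{"kubernetes.io/arch": {"arm64"}}
        })
    }
    fmt.Println("computations:", computations) // prints 1
}

The actual change caches requirements derived from state node labels (see the quoted SimulationCache snippet in the review below), which avoids repeating that conversion for every candidate evaluated during a pass.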

How was this change tested?
Two deployments were applied to a cluster, each with a pod anti-affinity rule selecting its own pods and using the kubernetes.io/hostname topology key. The deployments also had requests sized so that two pods would fit per node; after scaling to 1000 replicas, the result was 1000 dataplane nodes. The results below show how many candidates could be considered for multi-node and single-node consolidation.

Two deployments like:

    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: pause
            topologyKey: kubernetes.io/hostname
      containers:
      - image: registry.k8s.io/pause:3.9
        imagePullPolicy: IfNotPresent
        name: pause
        resources:
          limits:
            memory: 1Gi
          requests:
            cpu: 350m
            memory: 512Mi

NodePool:

spec:
  disruption:
    budgets:
    - nodes: 100%
    consolidateAfter: 0s
    consolidationPolicy: WhenEmptyOrUnderutilized
  template:
    spec:
      expireAfter: 720h
      nodeClassRef:
        group: karpenter.kwok.sh
        kind: KWOKNodeClass
        name: default
      requirements:
      - key: kubernetes.io/arch
        operator: In
        values:
        - arm64
      - key: kubernetes.io/os
        operator: In
        values:
        - linux
      - key: karpenter.sh/capacity-type
        operator: In
        values:
        - on-demand
      - key: karpenter.kwok.sh/instance-cpu
        operator: Lt
        values:
        - "2"
      - key: karpenter.kwok.sh/instance-family
        operator: In
        values:
        - c
        - m
        - r

KWOKNodeClass:

apiVersion: v1
items:
- apiVersion: karpenter.kwok.sh/v1alpha1
  kind: KWOKNodeClass
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"karpenter.kwok.sh/v1alpha1","kind":"KWOKNodeClass","metadata":{"annotations":{},"name":"default"},"spec":{"nodeRegistrationDelay":"20s"}}
    creationTimestamp: "2025-03-12T20:18:20Z"
    generation: 2
    name: default
    resourceVersion: "9421391"
    uid: 61224b65-a752-43ae-b2c2-92160375eabc
  spec:
    nodeRegistrationDelay: 20s
  status:
    conditions:
    - lastTransitionTime: "2024-01-01T01:01:01Z"
      message: ""
      reason: Ready
      status: "True"
      type: Ready
kind: List
metadata:
  resourceVersion: ""

All other consolidation methods were disabled except for the one being tested, and the Karpenter pod was run with 1 CPU and 1Gi of memory:

Single-node consolidation improved by ~65%, averaged across three runs:
before

{"level":"DEBUG","time":"2025-03-27T00:48:25.035Z","logger":"controller","caller":"disruption/controller.go:184","message":"abandoning single-node consolidation due to timeout after evaluating 36 candidates"
{"level":"DEBUG","time":"2025-03-27T00:49:37.163Z","logger":"controller","caller":"disruption/controller.go:184","message":"abandoning single-node consolidation due to timeout after evaluating 47 candidates"
{"level":"DEBUG","time":"2025-03-27T00:50:50.391Z","logger":"controller","caller":"disruption/controller.go:184","message":"abandoning single-node consolidation due to timeout after evaluating 48 candidates"

after

{"level":"DEBUG","time":"2025-03-27T00:38:35.840Z","logger":"controller","caller":"disruption/controller.go:184","message":"abandoning single-node consolidation due to timeout after evaluating 53 candidates"
{"level":"DEBUG","time":"2025-03-27T00:39:48.544Z","logger":"controller","caller":"disruption/controller.go:184","message":"abandoning single-node consolidation due to timeout after evaluating 75 candidates"
{"level":"DEBUG","time":"2025-03-27T00:41:00.340Z","logger":"controller","caller":"disruption/controller.go:184","message":"abandoning single-node consolidation due to timeout after evaluating 73 candidates"

No OOMing with 1 CPU / 1Gi.

Multi-node consolidation time (to evaluate 100 candidates) improved by ~38%, averaged across five runs:
before

{"level":"DEBUG","time":"2025-03-27T00:56:29.330Z","logger":"controller","caller":"disruption/multinodeconsolidation.go:84","message":"stopping multi-node consolidation after 100 candidates, took: 10.688790815 seconds",
{"level":"DEBUG","time":"2025-03-27T00:56:55.932Z","logger":"controller","caller":"disruption/multinodeconsolidation.go:84","message":"stopping multi-node consolidation after 100 candidates, took: 13.289226584 seconds"
{"level":"DEBUG","time":"2025-03-27T00:57:22.942Z","logger":"controller","caller":"disruption/multinodeconsolidation.go:84","message":"stopping multi-node consolidation after 100 candidates, took: 14.119292574 seconds"
{"level":"DEBUG","time":"2025-03-27T00:57:44.783Z","logger":"controller","caller":"disruption/multinodeconsolidation.go:84","message":"stopping multi-node consolidation after 100 candidates, took: 10.159742381 seconds"
{"level":"DEBUG","time":"2025-03-27T00:58:03.156Z","logger":"controller","caller":"disruption/multinodeconsolidation.go:84","message":"stopping multi-node consolidation after 100 candidates, took: 6.980654512 seconds"

after

{"level":"DEBUG","time":"2025-03-27T01:01:53.736Z","logger":"controller","caller":"disruption/multinodeconsolidation.go:84","message":"stopping multi-node consolidation after 100 candidates,  took: 7.780538216 seconds"
{"level":"DEBUG","time":"2025-03-27T01:02:13.843Z","logger":"controller","caller":"disruption/multinodeconsolidation.go:84","message":"stopping multi-node consolidation after 100 candidates,  took: 8.360195179 seconds"
{"level":"DEBUG","time":"2025-03-27T01:02:34.332Z","logger":"controller","caller":"disruption/multinodeconsolidation.go:84","message":"stopping multi-node consolidation after 100 candidates,  took: 7.608574832 seconds"
{"level":"DEBUG","time":"2025-03-27T01:02:51.346Z","logger":"controller","caller":"disruption/multinodeconsolidation.go:84","message":"stopping multi-node consolidation after 100 candidates,  took: 4.714366802 seconds"
{"level":"DEBUG","time":"2025-03-27T01:03:09.737Z","logger":"controller","caller":"disruption/multinodeconsolidation.go:84","message":"stopping multi-node consolidation after 100 candidates,  took: 5.992297541 seconds"

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: rschalo
Once this PR has been reviewed and has the lgtm label, please assign gjtempleton for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot requested a review from engedaam March 24, 2025 21:04
@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Mar 24, 2025
@k8s-ci-robot k8s-ci-robot requested a review from tallaxes March 24, 2025 21:04
@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Mar 24, 2025
@coveralls

Pull Request Test Coverage Report for Build 14045879893

  • 31 of 31 (100.0%) changed or added relevant lines in 11 files are covered.
  • 2 unchanged lines in 1 file lost coverage.
  • Overall coverage increased (+0.1%) to 81.693%

Files with coverage reduction:
  pkg/controllers/disruption/drift.go: 2 new missed lines, 89.83% covered

Totals:
  Change from base Build 14001181981: 0.1%
  Covered Lines: 9670
  Relevant Lines: 11837

💛 - Coveralls

@rschalo rschalo marked this pull request as draft March 25, 2025 14:57
@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 25, 2025
@rschalo rschalo marked this pull request as ready for review March 27, 2025 01:06
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 27, 2025
@jonathan-innis
Member

/assign @jmdeal


// StateNodeLabelRequirements returns the scheduling requirements for the state node's labels. This is safe to cache
// as we don't modify these requirements and the state nodes won't change during a consolidation pass.
func (c *SimulationCache) StateNodeLabelRequirements(n *state.StateNode) scheduling.RequirementsReadOnly {
Member

I'm confused: We pass a scheduler cache into our scheduler, but this scheduler cache only stores data from a single node?

return np.Name, corev1.ResourceList(np.Spec.Limits)
}),
clock: clock,
cache: cache,
Member

Given that we already have a pod cache -- does it make sense to share this cache across both node requirements and pod requirements so that we keep a single cache and just have different data in it?

@jonathan-innis
Member

/assign @jonathan-innis

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Apr 11, 2025
@k8s-ci-robot
Contributor

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@talayalon

We are having issues with consolidating nodes in a large EKS cluster (>1000 nodes).
We keep seeing nodes being marked for consolidation and then, a few minutes later, the mark being removed.
Could this PR be related to this weird behaviour?

@rschalo rschalo closed this May 30, 2025